RELEASE VERSION = 2.6.5.159
Intel® QuickAssist Technology (Intel® QAT) Driver for VMware ESXi*
================================================================================
This software enables the Intel® QuickAssist Technology (Intel® QAT)
accelerator on VMware ESXi via DirectPath I/O*.

The driver enables sharing of a Physical Function across multiple guest
Virtual Machines (VMs) using Single Root I/O Virtualization (SR-IOV)
technology. This is accomplished by exposing the Intel® QAT accelerator as
Virtual Functions (VFs) to specific guest VMs.

This document describes how to install and configure the Intel® QAT host
driver on the VMware ESXi hypervisor and how to make Intel® QAT hardware
available inside guests.
For more details on how to use Intel® QAT from a Virtual Machine,
please refer to the documents mentioned in the "Documentation" section.

Note that this document assumes that the reader is familiar with virtualization
technologies and has some level of familiarity with the VMware ESXi hypervisor.
This document does not explain how to install VMware ESXi, how to install
a virtual machine, or how to administer ESXi using vSphere* Host Client or
VMware vCenter*.
For more details on these topics please refer to VMware's documentation.

*Other names and brands may be claimed as the property of others.


License
=======
Refer to LICENSE.txt in this package for license information before using
this software.


Details/Limitations of this Release
===================================
* This software release is intended for platforms that contain:
  * 4th Generation Intel® Xeon® Scalable Processor (4xxx QAT)
  * 5th Generation Intel® Xeon® Scalable Processor (4xxx QAT)
  * Intel® Xeon® 6 Processor (4xxx QAT)
* ESXi limitation: the number of PCI passthrough devices per VM is limited;
  check the KB article (https://kb.vmware.com/s/article/1003497) for the exact
  limits. ESXi will not allow a VM to power on if the limit is exceeded.


Documentation
=============
Associated software and collaterals can be found on the Intel® QAT landing page:
https://developer.intel.com/quickassist

Additional documentation includes:
 * Intel® QuickAssist Technology Software for Linux* - Release Notes - Hardware
   Version 2.0
 * Intel® QuickAssist Technology Software for Linux* - Getting Started Guide -
   Hardware Version 2.0
 * Intel® QuickAssist Technology Software for Linux* - Programmer's Guide -
   Hardware Version 2.0
 * Intel® QuickAssist Technology Software for Windows* - Release Notes
 * Intel® QuickAssist Technology Software for Windows* - Technical Guide
 * Intel® QuickAssist Technology API Programmer's Guide
 * Intel® QuickAssist Technology Cryptographic API Reference Manual
 * Intel® QuickAssist Technology Compression API Reference Manual


Installing Intel® QAT VMware Driver
===================================
All possible installation methods are described in the KB article "How to
download and install async drivers in VMware ESXi"
(https://kb.vmware.com/s/article/2005205).
Here we cover the one using a component, also known as an offline bundle.

1) Open a Secure Shell (SSH) connection to the target ESXi host (SSH should be
   enabled to perform this operation).
2) Copy the component bundle to the ESXi server.
   Technically, you can place the file anywhere that is accessible to the ESXi
   console shell, but for these instructions we assume the location is the
   '/tmp' directory.
   Here is an example of using the Linux `scp` utility to copy the file from
   its download location to the target ESXi server located at 10.10.10.10:
    > scp qat-2.0_*.tar.gz root@10.10.10.10:/tmp
3) Extract the package (assuming it has been copied to the /tmp directory):
    > cd /tmp
    > tar -xzf qat-2.0_*.tar.gz
4) Install the driver from component:
    > esxcli software component apply --depot /tmp/*Intel-qat*.zip
5) Reboot the system to complete driver installation:
    > reboot
6) If the Intel® QAT driver has been loaded without errors, you should see
   the qat module listed among the system modules:
    > esxcfg-module --list | grep qat
    > qat               8    3968


Using SR-IOV VFs
================

Enabling SR-IOV
---------------
1) Login to the target ESXi host via vSphere Host Client.
2) In the left pane, expand "Host" and click on "Manage" section.
3) In the center pane, choose "Hardware" tab and "PCI Devices" sub-tab.
4) Find devices with "Intel Corporation QAT" text in the "Description" column
   and click the "Configure SR-IOV" button.
5) In the dialog window, change the "Enabled" parameter to "Yes" and enter the
   desired number of VFs, between 1 and the maximum indicated in the window.
6) Click "Save" and ESXi will reconfigure device with requested number of VFs.
   On this step, ESXi may notify you that SR-IOV configuration changes may
   not take effect until the host is restarted. If you don't see VFs
   listed in UI, please follow VMware recommendation and reboot system.
7) The Intel® QAT VFs are now enabled in the system. They should appear in
   the "PCI Devices" list in the UI as devices with "QAT VF" in the name.
   You can additionally verify this by running the `lspci` command in the ESXi
   shell. For example:
    > lspci -vn | grep -E '4941|4943|4945'
    0000:6b:00.1 Class 0b40: 8086:4941
    ....
    0000:e9:00.1 Class 0b40: 8086:4943
    ....

   Note: Actual output depends on the hardware available on the particular
   system.

   At this point the Intel® QAT VFs can be attached to a guest Virtual Machine
   (see the "Assign the QAT VF device to VM" section in this document).


Assign the QAT VF device to VM
------------------------------

Assign the QAT VF device to VM via vSphere Client
-------------------------------------------------
1) Login to the VMware vCenter via vSphere Client.
2) Find target VM in the inventory. Ensure that the VM is powered off.
3) In the center pane, click "Edit Settings..." icon to edit VM configuration.
   A pop-up window with the VM settings will appear.
4) Click on the "ADD NEW DEVICE" menu and choose "PCI Device". A "Device
   Selection" pop-up will appear. In this new window choose desired
   PCI device with "QAT VF" in name. The BDFs listed here will match with the
   output of the `lspci -vn | grep -E '4941|4943|4945'` command.

   Click "SELECT" button to confirm choice. Additional VFs could be added by
   repeating this step.

5) All memory for the VM must be reserved. Expand "Memory" and check that the
   "Reservation" section reports "All VM memory is reserved for this VM.".
   Otherwise, set the "Reserve all guest memory (All locked)" checkbox.
6) Some Linux guest drivers may require IOMMU to be enabled in the VM settings.
   Expand "CPU" and set the "Enabled" checkbox next to the "I/O MMU" setting.
7) Click "OK" to save VM configuration.


Assign the QAT VF device to VM via vSphere Host Client
------------------------------------------------------
1) Login to the target ESXi host via vSphere Host Client.
2) In the left pane, click on VMs.
3) In the center pane, click on the desired Virtual Machine.
   Ensure that the VM is powered off.
4) Click on the "Edit" button to edit the virtual machines settings. A pop-up
   window with the VM settings will appear.
5) Click on "Add other Device" and select "PCI device". The new PCI device will
   be added. By default, it selects the first VF in the system, but it may
   not be a Intel® QAT VF since there could be other PCI devices present in the
   system. To select Intel® QAT VF, click the drop-down list and choose desired
   PCI device with "QAT VF" in name. The BDFs listed here will match with the
   output of the `lspci -vn | grep -E '4941|4943|4945|4947'` command.
   Additional VFs can be added by repeating this step.
6) All memory for the VM must be reserved. Expand "Memory" and set the
   "Reserve all guest memory (All locked)" checkbox.
7) Some Linux guest drivers may require IOMMU to be enabled in the VM settings.
   Expand "CPU" and set the "Expose IOMMU to the guest OS" checkbox.
8) Click "Save".

You now have one or more VFs attached to your guest, and the VM is ready to be
powered on. Refer to the "Installing Intel® QAT Software on the Guest" section
for details about the guest driver.


Installing Intel® QAT Software on the Guest
===========================================
For instructions on how to install the guest driver inside a VM, please refer
to the corresponding guest driver collateral, such as its "Getting Started
Guide".


Driver configuration
====================
The Intel® QAT host driver has configuration options controlled via module
parameters.

Heartbeat
---------
Intel® QAT hardware provides a Heartbeat feature to detect hardware issues.
The driver polls hardware counters at a defined interval and raises an error
event if the hardware becomes unresponsive. The user can control the interval
of the heartbeat checks, or disable the feature, via the `hb_interval` module
parameter.
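
For example, the parameter could be applied with the esxcli syntax shown in
the "Control module parameters" section. Since `esxcli` exists only in the
ESXi shell, this sketch just prints the command to run there; `500` is purely
an illustrative value, not a recommendation (check `esxcli system module
parameters list --module qat` for the accepted range and units):

```shell
# Illustrative sketch: build and print the esxcli command that would set
# the heartbeat check interval. Run the printed command in the ESXi shell.
PARAMS="hb_interval=500"   # 500 is a placeholder value, not a recommendation
echo "esxcli system module parameters set --module qat --parameter-string \"$PARAMS\""
```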

Rate Limiting
-------------
Intel® QAT hardware provides a feature to control how hardware resources are
shared between virtualization instances (VFs, in the case of SR-IOV). When
Rate Limiting (RL) is disabled, all resources are shared across instances
concurrently. When RL is enabled, all resources are divided equally between
instances to provide the same level of performance for each. The user can
control this feature via the `rl_eq_div` module parameter.

Each device may have a specific configuration (the order of Intel® QAT devices
matches their order of appearance in the `lspci` output). For example, to
enable rate limiting on the first and third devices, pass the array of values
`1,0,1` for `rl_eq_div`. If a value is not set explicitly, the default is used.

Since the division depends on the number of virtualization instances, the user
can also control the partitioning of resources by changing the number of
instances.
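
The `rl_eq_div` example above can be applied with the esxcli syntax shown in
the "Control module parameters" section. Since `esxcli` exists only in the
ESXi shell, this sketch just prints the command to run there:

```shell
# Sketch: enable equal division of resources on the first and third QAT
# devices; the second device keeps the default. Prints the ESXi shell command.
PARAMS="rl_eq_div=1,0,1"
echo "esxcli system module parameters set --module qat --parameter-string \"$PARAMS\""
```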

Services
--------
Intel® QAT hardware supports several different services. By default, each PF
is configured to provide a specific set of services, but this can be changed
via the `srv_mask` module parameter.

The following table shows which services are available for the given hardware,
the default configuration, and how many services can be configured
simultaneously:

  |  Device   | Available services | Default config | # of services |
  |===========|====================|================|===============|
  | 4XXX QAT  |   ASYM, SYM, DC    |   ASYM + DC    |       2       |

The bit positions of the `srv_mask` parameter enable particular services:

  | Bit position |  7 to 4  |  3   | 2  |  1  |    0     |
  |==============|==========|======|====|=====|==========|
  | Service      | Reserved | ASYM | DC | SYM | Reserved |

For example, to enable the ASYM and SYM services, the decimal value `10`
(2|8) should be passed. Each device may have a specific configuration (the
order of Intel® QAT devices matches their order of appearance in the `lspci`
output). To enable the SYM configuration on the first device, ASYM on the
second, DC on the fourth, and the default configuration on the remaining
devices, pass the value `2,8,0,4` for `srv_mask`. The value `0` represents
the default. If a value is not set explicitly, the default is used.
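
The bit arithmetic behind these masks can be checked in any POSIX shell
(values taken from the bit-position table above):

```shell
# srv_mask bit values from the table: ASYM = bit 3 (8), DC = bit 2 (4),
# SYM = bit 1 (2). Combine services by OR-ing their bit values.
echo $((8 | 2))   # ASYM + SYM -> 10
echo $((8 | 4))   # ASYM + DC  -> 12 (matches the 4XXX default configuration)
echo $((4 | 2))   # SYM + DC   -> 6
```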

Attempting to use an incorrect or unsupported configuration for a device may
prevent the driver from initializing properly, and the hardware may become
unavailable until a proper configuration is used or the configuration is
reset to the default.

Because virtualization always involves two drivers, host and guest, both
should be properly configured. The guest driver configuration should match
the capabilities provided by the host driver.
More details about guest driver configuration can be found in the
corresponding documentation, such as the Release Notes or Getting Started
Guide.

Compression chaining
--------------------
Compression chaining support can be enabled via the `chaining_enabled` module
parameter. Compression chaining requires a special service configuration and
should be enabled only if the compression service is configured.

The following table shows which service configurations are compatible with
chaining for the given hardware:

  |  Device   | Compatible configuration |
  |===========|==========================|
  | 4XXX QAT  |            DC            |

Each device may have a specific configuration (the order of Intel® QAT devices
matches their order of appearance in the `lspci` output). For example, to
enable chaining on the fourth device, pass the value `0,0,0,1` for
`chaining_enabled`. The value `0` represents the default. If a value is not
set explicitly, the default is used.

Control module parameters
-------------------------
To set module parameters for the driver, execute the following command in the
ESXi shell:
  > esxcli system module parameters set --module qat --parameter-string "param1=value1 param2=value2"

For example, to apply the service mask and chaining configurations from the
previous examples, pass the parameter string
"srv_mask=2,8,0,4 chaining_enabled=0,0,0,1".

To view module parameters with their descriptions and previously set values,
execute the following command in the ESXi shell:
  > esxcli system module parameters list --module qat

To reset the module configuration to the defaults, execute the following
command in the ESXi shell:
  > esxcli system module parameters clear --module qat

Configuration is applied only at module initialization, so after updating the
device configuration you should restart the system or reload the driver as
described in the "Recover from fatal errors" section of this document to apply
the desired configuration.

As mentioned earlier, the driver may fail to attach to a device with an
incorrect configuration. In such cases, check `/var/log/vmkernel.log` and
adjust the configuration accordingly.


Intel® Device Manager for VMware* vCenter Server support
========================================================
The Intel® QAT driver supports Intel® Device Manager for VMware* vCenter
Server. Features such as telemetry and dynamic control of Rate Limiting are
available only through the plugin. For detailed information, refer to the
Intel® Device Manager for VMware* vCenter Server User Guide, available at
this link:
https://www.intel.com/content/www/us/en/developer/articles/guide/device-manager-vc-user-guide.html


Uninstalling Intel® QAT VMware Driver
=====================================
1) Open an SSH connection to the target ESXi host.
2) Uninstall driver component:
    > esxcli software component remove --component Intel-qat
3) Reboot system to complete removal:
    > reboot


Known issues
============
1) By default, VMkernel supports only 1024 interrupt cookies. On systems with
   a large number of QAT devices and other accelerators, interrupt cookies can
   be exhausted, which may lead to various issues and leave QAT hardware
   unavailable. Increase the number of interrupt cookies to the desired value
   (up to 4096) to support more QAT devices with the following command:
       > esxcli system settings kernel set --setting=maxIntrCookies --value=4096


Recover from fatal errors
=========================
In the event of a persistent device error state that cannot be recovered
by software, it is recommended to manually reload the PF driver
on the ESXi host or reset the host itself.
The driver resets and recovers the PF device during reloading.
Steps to reload the driver:
1) Power off all the VMs that are using Intel® QAT hardware.
2) Execute the following commands via SSH on the ESXi host to reload the PF
   driver:
    > esxcfg-module --unload qat
   and signal to ESXi device manager to rediscover HW:
    > kill -HUP $(cat /var/run/vmware/vmkdevmgr.pid)

Note: If the driver is used together with Intel® Accelerator Management Daemon
for VMware ESXi*, the driver unload may report "module symbols in use". In
this case, proceed with a system reboot, or refer to the daemon documentation
on how to temporarily stop the daemon and then try to unload the driver again.
